Dynamic Stochastic Games with Sequential State-to-State Transitions
Abstract
Discrete-time stochastic games with a finite number of states have been widely applied to study the strategic interactions among forward-looking players in dynamic environments. The model as written down by Ericson & Pakes (1995) and Pakes & McGuire (1994, 2001) (hereafter EP, PM1, and PM2), and as used in the subsequent literature (e.g., Gowrisankaran 1999; Fershtman & Pakes 2000; Benkard 2004) and in standard textbook treatments of stochastic games (e.g., Filar & Vrieze 1997; Basar & Olsder 1999), assumes that the states of all players change at exactly the same point in each period (say, at the end of the period). That is, the transitions from this period’s state to next period’s state are simultaneous. As PM2 and Doraszelski & Judd (2004) (hereafter DJ) point out, these games with simultaneous state-to-state transitions suffer from a “curse of dimensionality”: the cost of computing players’ expectations over all possible future states increases exponentially in the number of state variables. However, there are many other ways to formulate dynamic stochastic games, and some of them may be computationally more tractable than others. In particular, we show that there are games with sequential state-to-state transitions that do not suffer from the curse of dimensionality in the expectation over successor states.
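The computational point in the abstract can be illustrated with a small sketch. The setup below is ours, not the paper's: with n players and K possible transitions per player per period, simultaneous transitions force each player's expectation to range over every combination of moves (K^n successor states), whereas sequential transitions, where only one player's state changes at a time, keep each expectation to K terms:

```python
from itertools import product

def simultaneous_successors(n, K):
    """All players' states move at once: every combination of the n
    players' K transitions is a distinct successor state."""
    return K ** n  # exponential in the number of state variables

def sequential_successors(n, K):
    """One player's state moves per sub-period: K successors per move,
    n moves to cycle through all players."""
    return n * K  # linear in the number of state variables

# The blowup is visible even for modest n:
for n in (2, 5, 10):
    print(f"n={n}: simultaneous {simultaneous_successors(n, 3)}, "
          f"sequential {sequential_successors(n, 3)}")

# Why K**n terms arise under simultaneity: with independent per-player
# transition probabilities, each successor state's probability is a
# product of per-player probabilities, one term per combination.
probs = [[0.2, 0.5, 0.3]] * 3              # 3 players, 3 transitions each
total = sum(p1 * p2 * p3
            for (p1, p2, p3) in product(*probs))  # 3**3 = 27 terms
assert abs(total - 1.0) < 1e-12            # the 27 probabilities sum to one
```

The term counts are the whole story here: any value-function expectation must visit each successor state once, so the number of terms is a lower bound on the cost of one Bellman update.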
Similar papers
Closed-form Solutions to a Subclass of Continuous Stochastic Games via Symbolic Dynamic Programming
Zero-sum stochastic games provide a formalism to study competitive sequential interactions between two agents with diametrically opposing goals and evolving state. A solution to such games with discrete state was presented by Littman (Littman, 1994). The continuous state version of this game remains unsolved. In many instances continuous state solutions require nonlinear optimisation, a problem...
Strategy recovery for stochastic mean payoff games
We prove that finding optimal positional strategies for stochastic mean payoff games when the value of every state of the game is known is, in general, as hard as solving such games outright. This answers a question posed by Daniel Andersson and Peter Bro Miltersen. In this note, we consider perfect-information zero-sum stochastic games, which, for short, we will just call stochastic games. For ...
Stochastic Dynamic Programming with Markov Chains for Optimal Sustainable Control of the Forest Sector with Continuous Cover Forestry
We present a stochastic dynamic programming approach with Markov chains for optimal control of the forest sector. The forest is managed via continuous cover forestry and the complete system is sustainable. Forest industry production, logistic solutions and harvest levels are optimized based on the sequentially revealed states of the markets. Adaptive full system optimization is necessary for co...
Stochastic Games with Unbounded Payoffs: Applications to Robust Control in Economics
We study a discounted maxmin control problem with general state space. The controller is unsure about his model in the sense that he also considers a class of approximate models as possibly true. The objective is to choose a maxmin strategy that will work under a range of different model specifications. This is done by dynamic programming techniques. Under relatively weak conditions, we show th...
Traffic Condition Detection in Freeway by using Autocorrelation of Density and Flow
Traffic conditions vary over time, and therefore, traffic behavior should be modeled as a stochastic process. In this study, a probabilistic approach utilizing Autocorrelation is proposed to model the stochastic variation of traffic conditions, and subsequently, predict the traffic conditions. Using autocorrelation of the time series samples of density and flow which are collected from segments...